The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
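The most common workaround reported for oversized samples, patch-based training (69%), can be illustrated with a minimal sketch: random fixed-size crops are drawn from a volume too large to process at once. The function name and shapes below are illustrative, not from the survey.

```python
import numpy as np

def sample_patches(volume, patch_size, n_patches, rng=None):
    """Randomly crop fixed-size patches from a volume that is too large
    to be processed in one pass (a generic sketch of patch-based training)."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        # Pick a random corner so the patch lies fully inside the volume.
        corner = [rng.integers(0, d - p + 1) for d, p in zip(volume.shape, patch_size)]
        slices = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[slices])
    return np.stack(patches)

# A toy 3D array stands in for a biomedical volume.
volume = np.zeros((64, 64, 64), dtype=np.float32)
batch = sample_patches(volume, patch_size=(16, 16, 16), n_patches=8)
```

Each training step then sees a mini-batch of patches rather than the full volume, trading global context for a bounded memory footprint.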
Unlike conventional knowledge distillation (KD), self-KD allows a network to learn knowledge from itself without the guidance of an extra network. This paper proposes to perform self-KD from image Mixture (MixSKD), which integrates these two techniques into a unified framework. MixSKD mutually distills feature maps and probability distributions between a random pair of original images and their mixup image in a meaningful way. It thus guides the network to learn cross-image knowledge by modeling supervisory signals from the mixup images. Moreover, we construct a self-teacher network by aggregating multi-stage feature maps to provide soft labels that supervise the backbone classifier, further improving the efficacy of self-boosting. Experiments on image classification and on transfer learning to object detection and semantic segmentation demonstrate that MixSKD outperforms other state-of-the-art self-KD and data augmentation methods. The code is available at https://github.com/winycg/self-kd-lib.
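The core consistency idea can be sketched numerically: the prediction on a mixup image is encouraged to match the same convex combination of the predictions on the two source images. The distributions and the KL-based loss below are toy stand-ins, not MixSKD's exact objective.

```python
import numpy as np

def mixup(x1, x2, lam):
    # Convex combination of two inputs, as in standard mixup augmentation.
    return lam * x1 + (1.0 - lam) * x2

def kl_div(p, q, eps=1e-12):
    # KL(p || q) between two discrete probability distributions.
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Toy predicted class distributions for two source images.
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.1, 0.1, 0.8])
lam = 0.6

# Self-distillation target: the network's output on the mixed image should
# match the mixture of its outputs on the individual images.
target = mixup(p1, p2, lam)
pred_on_mix = np.array([0.45, 0.15, 0.40])  # hypothetical network output
loss = kl_div(target, pred_on_mix)
```

A cross-image signal of this form is what lets the network distill knowledge from itself without a separate teacher.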
This paper focuses on online kernel learning over a decentralized network. Each agent in the network receives continuous streaming data locally and works collaboratively to learn a nonlinear prediction function that is optimal in the reproducing kernel Hilbert space with respect to the total instantaneous cost of all agents. To circumvent the curse of dimensionality in traditional online kernel learning, we exploit random feature (RF) mapping to convert the non-parametric kernel learning problem into a fixed-length parametric one in the RF space. We then propose a novel learning framework, named Online Decentralized Kernel learning via Linearized ADMM (ODKLA), to solve the online decentralized kernel learning problem efficiently. To further improve communication efficiency, we add quantization and censoring strategies to the communication stage and develop the Quantized and Communication-censored ODKLA (QC-ODKLA) algorithm. We theoretically prove that both ODKLA and QC-ODKLA achieve the optimal sublinear regret $\mathcal{O}(\sqrt{T})$ over $T$ time slots. Through numerical experiments, we evaluate the learning effectiveness as well as the communication and computation efficiency of the proposed methods.
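The random feature mapping that makes the problem fixed-length parametric is the classic random Fourier feature construction of Rahimi and Recht, sketched below for a Gaussian kernel; the specific kernel and parameters are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def rff_map(X, n_features, gamma, rng):
    """Random Fourier features z(x) such that z(x)·z(y) approximates the
    Gaussian kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # For k(x, y) = exp(-gamma * ||x - y||^2), frequencies are drawn
    # from N(0, 2 * gamma * I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = rff_map(X, n_features=2000, gamma=0.5, rng=rng)

approx = Z @ Z.T                                    # approximate kernel matrix
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-0.5 * sq_dists)                     # exact Gaussian kernel
err = np.abs(approx - exact).max()
```

Learning then reduces to a linear model on the fixed-length features `Z`, which is what allows the decentralized ADMM updates to operate on parameters of constant dimension regardless of how much data has streamed in.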
Federated learning learns from decentralized data by fusing collaborative models trained on local nodes. However, the conventional coordinate-based model averaging of FedAvg ignores the random information encoded in each parameter and can suffer from structural feature misalignment. In this work, we propose Fed2, a feature-aligned federated learning framework that resolves this problem by establishing a firm structure-feature alignment across the collaborative models. Fed2 consists of two major designs. First, we design a feature-oriented model structure adaptation method to ensure explicit feature allocation across different neural network structures. By applying this structure adaptation to the collaborative models, matched structures can be initialized with similar feature information at a very early training stage. During the federated learning process, we then propose a feature-paired averaging scheme to guarantee aligned feature distributions and avoid feature-fusion conflicts under both IID and non-IID scenarios. As a result, Fed2 effectively enhances federated learning convergence performance under extensive homogeneous and heterogeneous settings, providing excellent convergence speed, accuracy, and computation/communication efficiency.
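For context, the coordinate-wise FedAvg baseline that Fed2 argues can misalign features is simply a size-weighted average of client parameters, position by position. The sketch below shows that baseline (not Fed2's feature-paired scheme); the toy weights are illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Coordinate-wise weighted average of client models: the plain FedAvg
    baseline, which pairs parameters purely by position in the network."""
    total = sum(client_sizes)
    return [
        sum((n / total) * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two toy clients, each holding one weight matrix and one bias vector.
w_a = [np.ones((2, 2)), np.zeros(2)]
w_b = [3 * np.ones((2, 2)), 2 * np.ones(2)]
merged = fedavg([w_a, w_b], client_sizes=[10, 30])
```

Because the average is taken per coordinate, two clients whose neurons at the same index encode different features get blended anyway; Fed2's contribution is to align which structures carry which features before such averaging happens.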
In recent years, neural networks have demonstrated their power in various domains, but they have also brought growing security threats. Stegomalware based on neural network models is a representative example. Previous research preliminarily demonstrated the feasibility of launching malicious attacks by hiding malware in neural network models. However, existing works did not show that this emerging threat is practical in real-world attacks, due to the low malware embedding rate, degraded model performance, and the extra effort required. Therefore, we propose an improved stegomalware called EvilModel. Based on an analysis of the structure of neural network models, we embed binary-form malware into a model as its parameters and propose three new malware embedding techniques, namely MSB reservation, fast substitution, and half substitution. By marrying 19 malware samples with 10 popular neural network models, we build 550 malware-embedded models and analyze their performance on the ImageNet dataset. Experimental results show that half substitution performs almost perfectly, achieving a malware embedding rate of 48.52% with no model performance degradation and no extra effort. Considering a variety of factors, we propose a quantitative algorithm to evaluate the different embedding methods; the evaluation results show that EvilModel is far more competitive than the classic StegoNet. Furthermore, we conduct case studies to trigger EvilModel in real-world scenarios. To provide a deeper understanding of the proposed malware embedding techniques, we also study the impact of neural network structure, layer, and parameter size on malware embedding capacity and embedded-model accuracy. We additionally present some possible countermeasures to defend against EvilModel. We hope this work offers a comprehensive understanding of this new AI-powered threat so that defenses can be proposed in advance.
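The general parameter-embedding idea can be sketched with NumPy: payload bytes overwrite the low-order bytes of float32 weights while the most significant byte (sign and most of the exponent) is kept, so each value changes only modestly. This is a toy illustration of the MSB-reserve concept, not the paper's exact encoding, and it assumes little-endian float32 layout.

```python
import numpy as np

def embed_bytes(params, payload):
    """Hide payload bytes in the three low-order bytes of each float32 weight,
    preserving the most significant byte (a sketch of the MSB-reserve idea)."""
    raw = params.astype(np.float32).view(np.uint8).reshape(-1, 4)
    n = len(payload)
    assert n <= raw.shape[0] * 3, "payload too large for this tensor"
    # Little-endian float32: bytes 0-2 are the low-order bytes; byte 3 holds
    # the sign bit and most of the exponent, so we leave it untouched.
    flat = raw[:, :3].reshape(-1)
    flat[:n] = np.frombuffer(payload, dtype=np.uint8)
    raw[:, :3] = flat.reshape(-1, 3)
    return raw.view(np.float32).reshape(params.shape)

def extract_bytes(params, n):
    """Recover the first n hidden bytes from the low-order bytes of weights."""
    raw = params.astype(np.float32).view(np.uint8).reshape(-1, 4)
    return raw[:, :3].reshape(-1)[:n].tobytes()

weights = np.full(8, 0.5, dtype=np.float32)
stego = embed_bytes(weights, b"payload")
recovered = extract_bytes(stego, 7)
```

Because the bytes are stored verbatim inside the parameters rather than computed from them, extraction is exact, while the weight values stay finite and close to their originals.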
Command and control (C&C) is important in an attack: it transfers commands from the attacker to the malware in the compromised hosts. Currently, some attackers use online social networks (OSNs) for C&C tasks. There are two main problems with C&C on OSNs. First, the process by which the malware finds the attacker is reversible: if defenders analyze the malware samples, the attacker will be exposed before the commands are published. Second, commands in plain or encrypted form are treated as abnormal content by the OSN, which raises anomalies and triggers restrictions on the attacker; once exposed, the attacker can be blocked by defenders. In this work, we propose DeepC2, an AI-powered C&C on OSNs, to solve these problems. To address the reversible hard-coding, the malware finds the attacker using a neural network model: the attacker's avatars are converted into a batch of feature vectors, and defenders cannot recover the avatars in advance from the model and the feature vectors. To address the abnormal content on OSNs, hash collisions and text data augmentation are used to embed commands into normal content. Experiments on Twitter show that command-embedded tweets can be generated efficiently and that the malware can find the attacker covertly on the OSN. A security analysis shows that it is hard to recover the attacker's identifiers in advance.
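The hash-collision idea can be illustrated with a toy brute-force search: the published text only needs to contain a token whose hash matches a short prefix carried by the malware, so the text itself can look entirely normal. The prefix length, token alphabet, and function name below are illustrative assumptions, not DeepC2's actual scheme.

```python
import hashlib
import itertools
import string

def find_colliding_token(target_prefix: str, length: int = 4) -> str:
    """Brute-force a short lowercase token whose SHA-256 hex digest starts
    with target_prefix -- a toy stand-in for embedding a command check into
    normal-looking text via hash collision."""
    for cand in itertools.product(string.ascii_lowercase, repeat=length):
        token = "".join(cand)
        if hashlib.sha256(token.encode()).hexdigest().startswith(target_prefix):
            return token
    raise RuntimeError("no collision found at this length")

# The malware carries only the short hex prefix; any innocuous post
# containing a colliding token is recognized as a command carrier.
token = find_colliding_token("ab")
```

Because a 2-hex-digit prefix matches roughly 1 in 256 tokens, a collision is found quickly, and a defender seeing the post has no obvious signal that it encodes anything.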
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
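One plausible way to write the weighted mean-percentile objective is sketched below; all symbols here (the weight $\lambda$, the percentile level $\alpha$, the Wasserstein ball $\mathcal{B}_{\varepsilon}(\hat{P})$ of radius $\varepsilon$ around the empirical reward distribution $\hat{P}$, and the return $R^{\pi}$) are notational assumptions for illustration, not taken from the paper:

```latex
\max_{\pi}\;
\lambda \inf_{P \in \mathcal{B}_{\varepsilon}(\hat{P})}
  \mathbb{E}_{r \sim P}\!\left[ R^{\pi}(r) \right]
\;+\;
(1-\lambda) \inf_{P \in \mathcal{B}_{\varepsilon}(\hat{P})}
  \operatorname{VaR}_{\alpha}^{P}\!\left[ R^{\pi}(r) \right]
```

where $R^{\pi}(r)$ is the return of policy $\pi$ under reward realization $r$ and $\operatorname{VaR}_{\alpha}^{P}$ denotes the $\alpha$-percentile of the return under $P$. Setting $\lambda = 1$ or $\lambda = 0$ recovers the purely distributionally robust and purely percentile (chance-constrained) special cases the abstract mentions.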
In contrast to control-theoretic methods, the lack of a stability guarantee remains a significant problem for model-free reinforcement learning (RL) methods. Jointly learning a policy and a Lyapunov function has recently become a promising approach to equipping the whole system with a stability guarantee. However, the classical Lyapunov constraints that prior work introduced cannot stabilize the system during sampling-based optimization. Therefore, we propose the Adaptive Stability Certification (ASC), which makes the system achieve sampling-based stability. Because the ASC condition can search for the optimal policy heuristically, we design the Adaptive Lyapunov-based Actor-Critic (ALAC) algorithm based on the ASC condition. At the same time, our algorithm avoids the optimization problem in which a variety of constraints are coupled into the objective, as in current approaches. When evaluated on ten robotic tasks, our method achieves lower accumulated cost and fewer stability constraint violations than previous studies.
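A generic sampling-based Lyapunov check, which the classical constraints mentioned above instantiate, can be sketched as counting violations of a decrease condition over sampled transitions. The condition and decay rate below are a standard textbook form, not the paper's exact ASC condition.

```python
import numpy as np

def sampled_lyapunov_violations(V, transitions, alpha=0.1):
    """Count how often the sampled decrease condition
    V(s') - V(s) <= -alpha * V(s) is violated over a batch of transitions
    (a generic sampling-based stability check)."""
    violations = 0
    for s, s_next in transitions:
        if V(s_next) - V(s) > -alpha * V(s):
            violations += 1
    return violations

# Toy stable 1-D system x' = 0.5 x with candidate Lyapunov function V(x) = x^2:
# V decreases by 75% per step, so the 10%-decrease condition always holds.
V = lambda x: x ** 2
traj = [(x, 0.5 * x) for x in np.linspace(-1.0, 1.0, 11)]
n_bad = sampled_lyapunov_violations(V, traj, alpha=0.1)
```

In a learning loop, a count like this (or a differentiable surrogate of it) is what couples the Lyapunov function into the policy objective.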
A storyboard is a roadmap for video creation, consisting of shot-by-shot images that visualize the key plots in a text synopsis. Creating video storyboards, however, remains challenging: it not only requires association between high-level texts and images but also demands long-term reasoning to make transitions smooth across shots. In this paper, we propose a new task called Text synopsis to Video Storyboard (TeViS), which aims to retrieve an ordered sequence of images to visualize a text synopsis. We construct a MovieNet-TeViS benchmark based on the public MovieNet dataset. It contains 10K text synopses, each paired with keyframes manually selected from the corresponding movies by considering both relevance and cinematic coherence. We also present an encoder-decoder baseline for the task. The model uses a pretrained vision-and-language model to improve high-level text-image matching. To improve coherence across long-term shots, we further propose to pre-train the decoder on large-scale movie frames without text. Experimental results demonstrate that our proposed model significantly outperforms other models in creating text-relevant and coherent storyboards. Nevertheless, there is still a large gap compared to human performance, suggesting room for promising future work.
Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LM) and retrieval models (RM). Existing work has combined these in simple "retrieve-then-read" pipelines in which the RM retrieves passages that are inserted into the LM prompt. To begin to fully realize the potential of frozen LMs and RMs, we propose Demonstrate-Search-Predict (DSP), a framework that relies on passing natural language texts in sophisticated pipelines between an LM and an RM. DSP can express high-level programs that bootstrap pipeline-aware demonstrations, search for relevant passages, and generate grounded predictions, systematically breaking down problems into small transformations that the LM and RM can handle more reliably. We have written novel DSP programs for answering questions in open-domain, multi-hop, and conversational settings, establishing in early evaluations new state-of-the-art in-context learning results and delivering 37-200%, 8-40%, and 80-290% relative gains against vanilla LMs, a standard retrieve-then-read pipeline, and a contemporaneous self-ask pipeline, respectively.
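The structural difference between a retrieve-then-read pipeline and a DSP-style multi-stage program can be sketched with stub models. The functions `lm` and `rm` below are illustrative stand-ins for a frozen language model and retrieval model, not the DSP library's actual API.

```python
# Stub LM and RM so the pipeline shapes are runnable end to end.
def lm(prompt: str) -> str:
    # A real LM would generate text conditioned on the prompt.
    return f"answer({prompt[:30]}...)"

def rm(query: str, k: int = 3) -> list[str]:
    # A real RM would return the top-k passages for the query.
    return [f"passage {i} about {query}" for i in range(k)]

def retrieve_then_read(question: str) -> str:
    # Baseline pipeline: one retrieval step, one generation step.
    passages = rm(question)
    return lm(f"Context: {passages}\nQ: {question}\nA:")

def dsp_multihop(question: str, hops: int = 2) -> str:
    # DSP-style pipeline: alternate between LM-generated search queries and
    # RM retrieval, so each hop can condition on what was found so far,
    # before the final grounded prediction.
    context: list[str] = []
    query = question
    for _ in range(hops):
        context += rm(query)
        query = lm(f"Context: {context}\nWrite a follow-up search query for: {question}")
    return lm(f"Context: {context}\nQ: {question}\nA:")

result = dsp_multihop("Which award did the author's advisor win?")
```

The point of the decomposition is that each call asks the frozen LM or RM for a small, reliable transformation (a query, a passage set, a prediction) instead of one monolithic retrieve-and-answer step.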